2,507 research outputs found

    Simulation of Asynchronous Instruction Pipelines

    This paper presents the ARAS simulator, with which asynchronous instruction pipelines can be modelled, simulated and displayed. ARAS allows one to construct instruction pipelines by preparing various configuration files. Using these files and a number of benchmark programs, the performance of the instruction pipelines can be obtained. The performance of asynchronous instruction pipelines can also be compared to the synchronous case. Thus, one can decide the optimal design for instruction pipelines in asynchronous or synchronous cases and explore the design space of asynchronous instruction pipeline architectures.
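As a rough, hypothetical sketch of the trade-off such a simulator quantifies (this is not ARAS's actual timing model, and the latencies are invented): a clocked pipeline is paced by its slowest stage, while a self-timed pipeline hands data forward as soon as each stage finishes, so its throughput tracks the average stage latency instead.

```python
def synchronous_time(latencies, n_instructions):
    """Clocked pipeline: every stage advances once per clock, and the
    clock period must accommodate the slowest stage."""
    clock = max(latencies)
    # pipeline fill, then one completed instruction per clock
    return (len(latencies) + n_instructions - 1) * clock

def asynchronous_time_estimate(latencies, n_instructions):
    """Rough average-case model of a self-timed pipeline: after the first
    instruction drains through, completions arrive at roughly the average
    stage latency (handshake overheads ignored for simplicity)."""
    avg = sum(latencies) / len(latencies)
    return sum(latencies) + (n_instructions - 1) * avg
```

With stage latencies of 1, 2 and 3 time units, the asynchronous estimate beats the clocked pipeline because only one stage actually needs the full 3 units.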

    Validation of a deep learning system for the detection of diabetic retinopathy in Indigenous Australians

    BACKGROUND/AIMS: Deep learning systems (DLSs) for diabetic retinopathy (DR) detection show promising results but can underperform in racial and ethnic minority groups; external validation within these populations is therefore critical for health equity. This study evaluates the performance of a DLS for DR detection among Indigenous Australians, an understudied ethnic group who suffer disproportionately from DR-related blindness. METHODS: We performed a retrospective external validation study comparing the performance of a DLS against a retina specialist for the detection of more-than-mild DR (mtmDR), vision-threatening DR (vtDR) and all-cause referable DR. The validation set consisted of 1682 consecutive, single-field, macula-centred retinal photographs from 864 patients with diabetes (mean age 54.9 years, 52.4% women) at an Indigenous primary care service in Perth, Australia. Three-person adjudication by a panel of specialists served as the reference standard. RESULTS: For mtmDR detection, sensitivity of the DLS was superior to the retina specialist (98.0% (95% CI, 96.5 to 99.4) vs 87.1% (95% CI, 83.6 to 90.6), McNemar's test p<0.001) with a small reduction in specificity (95.1% (95% CI, 93.6 to 96.4) vs 97.0% (95% CI, 95.9 to 98.0), p=0.006). For vtDR, the DLS's sensitivity was again superior to the human grader (96.2% (95% CI, 93.4 to 98.6) vs 84.4% (95% CI, 79.7 to 89.2), p<0.001) with a slight drop in specificity (95.8% (95% CI, 94.6 to 96.9) vs 97.8% (95% CI, 96.9 to 98.6), p=0.002). For all-cause referable DR, there was a substantial increase in sensitivity (93.7% (95% CI, 91.8 to 95.5) vs 74.4% (95% CI, 71.1 to 77.5), p<0.001) and a smaller reduction in specificity (91.7% (95% CI, 90.0 to 93.3) vs 96.3% (95% CI, 95.2 to 97.4), p<0.001). CONCLUSION: The DLS showed improved sensitivity and similar specificity compared with a retina specialist for DR detection. This demonstrates its potential to support DR screening among Indigenous Australians, an underserved population with a high burden of diabetic eye disease.
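The paired comparisons above use McNemar's test, which depends only on the discordant pairs, i.e. the photographs where the DLS and the human grader disagree. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def mcnemar(b: int, c: int):
    """McNemar chi-square with continuity correction for paired binary
    classifications. b and c are the two discordant counts, e.g. eyes
    the DLS detected but the grader missed, and vice versa."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of chi-square with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

When the discordant pairs are lopsided (say 30 vs 3), the p-value is tiny; when they balance, the statistic stays near zero and the readers are statistically indistinguishable.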

    Capabilities of GPT-4 in ophthalmology: an analysis of model entropy and progress towards human-level medical question answering

    Background: Evidence on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in the ophthalmology question-answering domain is needed. // Methods: We tested GPT-4 on two 260-question multiple choice question sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models with varying temperatures (creativity setting) and evaluated their responses in a subset of questions. We also compared the best-performing GPT-4 model to GPT-3.5 and to historical human performance. // Results: GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among GPT-4 models, with 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. The combined accuracy was 72.9%, which represents an 18.3% raw improvement in accuracy compared with GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature higher than 0 (more creative). Exam section, question difficulty and cognitive level were all predictive of GPT-4-0.3 answer accuracy. GPT-4-0.3's performance was numerically superior to human performance on the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%), but the difference was not statistically significant (p=0.55 and p=0.09). // Conclusion: GPT-4, an LLM trained on non-ophthalmology-specific data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Remarkably, its performance tended to be superior to historical human performance, but that difference was not statistically significant in our study.
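The entropy analysis named in the title can be illustrated with a simple sketch (the answer lists are hypothetical, not the study's protocol): sampling the same question repeatedly and measuring the Shannon entropy of the answer distribution quantifies how much a higher temperature diversifies the model's responses.

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy (bits) of repeated answers to one multiple-choice
    question: 0 for a fully deterministic model, higher when temperature
    spreads probability mass over several options."""
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in Counter(answers).values())
```

A model that always answers "A" scores 0 bits; one split evenly over two options scores 1 bit, and over all four options 2 bits, the maximum for a four-choice question.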

    Prevalence of diabetic retinopathy in Indigenous and non-Indigenous Australians: a systematic review and meta-analysis

    TOPIC: This systematic review and meta-analysis summarises evidence relating to the prevalence of diabetic retinopathy (DR) among Indigenous and non-Indigenous Australians. CLINICAL RELEVANCE: Indigenous Australians suffer disproportionately from diabetes-related complications. Exploring ethnic variation in disease is important for equitable distribution of resources and may lead to identification of ethnic-specific modifiable risk factors. Existing DR prevalence studies comparing Indigenous and non-Indigenous Australians have shown conflicting results. METHODS: This study was conducted following Joanna Briggs Institute guidance on systematic reviews of prevalence studies (PROSPERO ID: CRD42022259048). We performed searches of Medline (Ovid), EMBASE, and Web of Science until October 2021, using a strategy designed by an information specialist. We included studies reporting DR prevalence among diabetic patients in Indigenous and non-Indigenous Australian populations. Two independent reviewers performed quality assessments using a 9-item appraisal tool. Meta-analysis and meta-regression were performed using double arcsine transformation and a random-effects model comparing Indigenous and non-Indigenous subgroups. RESULTS: Fifteen studies with 8219 participants met criteria for inclusion. The Indigenous subgroup scored lower on the appraisal tool compared to the non-Indigenous subgroup (mean score 50% vs 72%, p=0.04). In the unadjusted meta-analysis, DR prevalence in the Indigenous subgroup (30.2% [95% CI: 24.9-25.7]) did not differ significantly (p=0.17) from the non-Indigenous subgroup (23.7% [95% CI: 16.8-31.4]). After adjusting for age and for quality, DR prevalence was higher in the Indigenous subgroup (p-values<0.01), with prevalence ratio point estimates ranging from 1.72 to 2.58, depending on the meta-regression model. For the secondary outcomes, prevalence estimates were higher in the Indigenous subgroup for diabetic macular oedema (8.7% vs 2.7%, p=0.02) and vision-threatening DR (8.6% vs 3.0%, p=0.03), but not for proliferative DR (2.5% vs 0.8%, p=0.07). CONCLUSION: Indigenous studies scored lower for methodological quality, raising the possibility that systematic differences in research practices may be leading to underestimation of disease burden. After adjusting for age and for quality, we found a higher DR prevalence in the Indigenous subgroup. This contrasts with a previous review which reported the opposite finding of lower DR prevalence using unadjusted pooled estimates. Future epidemiological work exploring DR burden in Indigenous communities should aim to address methodological weaknesses identified by this review.
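The double arcsine (Freeman-Tukey) transformation used in the meta-analysis stabilises the variance of study-level prevalences before pooling, which matters for proportions near 0 or 1. A minimal sketch, illustrative rather than the review's actual code:

```python
import math

def freeman_tukey(x: int, n: int) -> float:
    """Freeman-Tukey double arcsine transform of a prevalence x/n.
    On this scale the sampling variance is roughly constant, so studies
    with extreme prevalences can be pooled without bias toward 0 or 1."""
    return math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))

def ft_variance(n: int) -> float:
    """Approximate sampling variance of the transformed value,
    depending only on the study's sample size."""
    return 1.0 / (n + 0.5)
```

Pooling then proceeds as an inverse-variance weighted average on the transformed scale, with results back-transformed to prevalences for reporting.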

    AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline

    Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 on IDRiD. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
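The ensemble-plus-confidence step in the quality-grading module can be sketched as follows. The 0.75 threshold and the three-way labels are illustrative assumptions, not AutoMorph's actual settings:

```python
def ensemble_gradable(prob_lists, threshold=0.75):
    """Average per-model 'gradable' probabilities across an ensemble and
    rectify low-confidence calls: an image the ensemble would label
    gradable, but with mean confidence below the threshold, is flagged
    as uncertain rather than passed downstream.
    (threshold is a hypothetical value for illustration)"""
    results = []
    for probs in prob_lists:  # probs: one gradable-probability per model
        mean_p = sum(probs) / len(probs)
        if mean_p >= threshold:
            results.append("gradable")
        elif mean_p >= 0.5:
            results.append("uncertain")  # would-be gradable, low confidence
        else:
            results.append("ungradable")
    return results
```

Routing the "uncertain" band to review rather than accepting it is what reduces the number of images incorrectly assessed as gradable.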

    Clinician-Driven AI: Code-Free Self-Training on Public Data for Diabetic Retinopathy Referral

    Importance: Democratizing artificial intelligence (AI) enables model development by clinicians who lack coding expertise, powerful computing resources, and large, well-labeled data sets. // Objective: To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models. // Design, Setting, and Participants: This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021. // Exposures: Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images. Main Outcomes and Measures: The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score.
    The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis. // Results: For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively. // Conclusions and Relevance: These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.
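The teacher-student exposure described above can be sketched end to end with a toy 1-D classifier standing in for the AutoML models. All data, and the threshold rule itself, are invented for illustration and are not the study's method:

```python
def train_threshold(xs, ys):
    """Fit a toy 1-D classifier: the decision boundary is the midpoint
    of the two class means (a stand-in for a real trained model)."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, xs):
    return [1 if x >= threshold else 0 for x in xs]

# Step 1: train the teacher on the labeled public set
labeled_x, labeled_y = [1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1]
teacher = train_threshold(labeled_x, labeled_y)

# Step 2: pseudolabel an unlabeled public pool with the teacher
unlabeled_x = [1.5, 2.5, 7.5, 8.5]
pseudo_y = predict(teacher, unlabeled_x)

# Step 3: train the student on labeled + pseudolabeled data combined
student = train_threshold(labeled_x + unlabeled_x, labeled_y + pseudo_y)
```

The student sees a larger effective training set at no extra labeling cost, which is the mechanism behind the generalizability gains the study reports.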

    The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: Mock galaxy catalogues for the BOSS Final Data Release

    We reproduce the galaxy clustering catalogue from the SDSS-III Baryon Oscillation Spectroscopic Survey Final Data Release (BOSS DR11 and DR12) with high fidelity on all relevant scales in order to allow a robust analysis of baryon acoustic oscillations and redshift space distortions. We have generated 6000 BOSS DR11 and 12 288 BOSS DR12 MultiDark PATCHY light cones corresponding to an effective volume of ~192 000 [h^-1 Gpc]^3 (the largest ever simulated volume), including cosmic evolution in the redshift range from 0.15 to 0.75. The mocks have been calibrated using a reference galaxy catalogue based on the halo abundance matching modelling of the BOSS DR11 and DR12 galaxy clustering data and on the data themselves. The production follows three steps. First, we apply the PATCHY code to generate a dark matter field and an object distribution including non-linear stochastic galaxy bias. Secondly, we run the halo/stellar distribution reconstruction HADRON code to assign masses to the various objects. This step uses the mass distribution as a function of local density and non-local indicators (i.e. tidal field tensor eigenvalues and relative halo exclusion separation for massive objects) from the reference simulation applied to the corresponding PATCHY dark matter and galaxy distribution. Finally, we apply the SUGAR code to build the light cones. The resulting MultiDark PATCHY mock light cones reproduce the number density, selection function, survey geometry, and, in general within 1σ for arbitrary stellar mass bins, the power spectrum up to k = 0.3 h Mpc^-1, the two-point correlation functions down to a few Mpc scales, and the three-point statistics of the BOSS DR11 and DR12 galaxy samples.

    Friedreich ataxia patient tissues exhibit increased 5-hydroxymethylcytosine modification and decreased CTCF binding at the FXN locus

    © 2013 Al-Mahdawi et al. Friedreich ataxia (FRDA) is caused by a homozygous GAA repeat expansion mutation within intron 1 of the FXN gene, which induces epigenetic changes and FXN gene silencing. Bisulfite sequencing studies have identified 5-methylcytosine (5mC) DNA methylation as one of the epigenetic changes that may be involved in this process. However, analysis of samples by bisulfite sequencing is a time-consuming procedure. In addition, it has recently been shown that 5-hydroxymethylcytosine (5hmC) is also present in mammalian DNA, and bisulfite sequencing cannot distinguish between 5hmC and 5mC.